
Conversation

@qianjinqiu

PR Category

Type of Change

Description

Issue

Progress

  • Change is properly reviewed (1 reviewer required, 2 recommended).
  • Change responds to an issue.
  • Change is fully covered by a unit test (UT).

Performance

@gemini-code-assist (Contributor)

Summary of Changes

Hello @qianjinqiu, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the Ascend backend's capabilities by integrating a broad array of new operators and optimizing existing ones through Triton kernel implementations. The changes aim to improve computational efficiency and expand the functional scope of the backend, ensuring better support for diverse deep learning models. Additionally, the update includes refinements to performance heuristics and standardization of debug logging for improved maintainability and clarity.

Highlights

  • Expanded Operator Coverage: A significant number of new operators have been added to the Ascend backend, including rms_norm, cat, cumsum, diag, embedding, flip, full, gather, index, linspace, masked_fill, multinomial, ones, outer, polar, randperm, repeat_interleave, resolve_neg, select_scatter, sort, threshold, unique, var_mean, vector_norm, vstack, where, and zeros variants.
  • New Fused Operators: Three new fused operators, apply_rotary_pos_emb, fused_add_rms_norm, and skip_layer_norm, have been introduced to enhance performance by combining common operations into single kernels (a sketch of this fusion pattern follows this list).
  • Triton Kernel Implementations: All newly added and many existing operators now leverage Triton kernels for optimized execution on Ascend hardware, demonstrating a shift towards more efficient low-level implementations.
  • Heuristics Configuration Updates: Performance heuristics for various operators like argmin, mm, rand, randn, uniform, and batch_norm have been refined, and num_warps and num_stages parameters have been removed from the tuning configurations for many operators, simplifying the tuning process.
  • Debug Logging Standardization: Debug log messages across several operators (e.g., cross_entropy_loss, addmm, all, amax, fill, max, min, mm, pow, triu) have been standardized to include 'GEMS_ASCEND' for clearer backend identification.
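The PR diff itself is not shown on this page, so the following is a hedged illustration only: a minimal Triton sketch of the add + RMSNorm fusion pattern that an operator like fused_add_rms_norm typically implements. All names, signatures, and shape assumptions below (2-D row-major input, per-row normalization) are illustrative, not the PR's actual code.

```python
# Hypothetical sketch of a fused residual-add + RMSNorm; not the PR's code.
import torch
import triton
import triton.language as tl


@triton.jit
def fused_add_rms_norm_kernel(
    x_ptr, residual_ptr, weight_ptr, out_ptr,
    n_cols, eps,
    BLOCK_SIZE: tl.constexpr,
):
    row = tl.program_id(0)
    cols = tl.arange(0, BLOCK_SIZE)
    mask = cols < n_cols
    offs = row * n_cols + cols

    # Fuse the residual add with the normalization: one pass over the row
    # instead of a separate add kernel followed by an RMSNorm kernel.
    x = tl.load(x_ptr + offs, mask=mask, other=0.0).to(tl.float32)
    r = tl.load(residual_ptr + offs, mask=mask, other=0.0).to(tl.float32)
    h = x + r

    # RMSNorm: h / sqrt(mean(h^2) + eps) * weight
    rms = tl.sqrt(tl.sum(h * h, axis=0) / n_cols + eps)
    w = tl.load(weight_ptr + cols, mask=mask, other=0.0).to(tl.float32)
    tl.store(out_ptr + offs, (h / rms) * w, mask=mask)


def fused_add_rms_norm(x, residual, weight, eps=1e-6):
    # Assumes a contiguous 2-D input; one program per row.
    n_rows, n_cols = x.shape
    out = torch.empty_like(x)
    BLOCK_SIZE = triton.next_power_of_2(n_cols)
    fused_add_rms_norm_kernel[(n_rows,)](
        x, residual, weight, out, n_cols, eps, BLOCK_SIZE=BLOCK_SIZE
    )
    return out
```

The point of the fusion is memory traffic: the residual tensor is read once and the intermediate sum is never written back to global memory.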
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist (bot) left a comment:

Code Review

This pull request introduces a significant number of new operators and fused kernels for the Ascend backend. While this is a great step towards feature parity, the implementation of several new Triton kernels contains critical correctness issues, such as using invalid syntax (Python loops inside JIT functions), which will prevent them from compiling. Additionally, there are multiple instances of severe performance anti-patterns, including launching kernels within loops and performing redundant multi-pass computations over the same data. I have also identified several maintainability concerns like dead code, non-English comments, and confusing logic. Addressing these issues is crucial for the stability, performance, and long-term health of the Ascend backend.

Comment on lines +121 to +145

```python
if rotary_interleaved:
    for d in range(0, BLOCK_D // 2):
        dim_range_x = d * 2
        dim_range_y = d * 2 + 1

        rotary_embedding_rw_kernel(
            state_out,
            state,
            cos,
            sin,
            stride_state_n,
            stride_state_h,
            stride_state_d,
            stride_cos_n,
            stride_cos_d,
            num_tokens,
            num_heads,
            token_range,
            head_range,
            dim_range_x,
            dim_range_y,
            rotary_interleaved,
        )
```
@gemini-code-assist (Contributor):

Severity: high

The implementation for the rotary_interleaved=True case in rotary_embedding_siso_kernel is a major performance bottleneck. It uses a Python for loop to launch a separate kernel (rotary_embedding_rw_kernel) for each pair of dimensions. Launching kernels in a loop is a significant anti-pattern in Triton that leads to very poor performance due to high launch overhead. This logic should be vectorized to happen within a single kernel launch, similar to how the non-interleaved case is handled.
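To make the suggested fix concrete, here is a hedged sketch of how the interleaved case could be vectorized inside a single kernel launch: one `tl.arange` over dimension pairs replaces the per-pair Python loop. The names mirror the snippet above, but the kernel body, layout assumptions (state of shape `(num_tokens, num_heads, head_dim)`, cos/sin indexed per pair, `BLOCK_D` a power of two), and launch wrapper are illustrative, not the PR's actual implementation.

```python
# Illustrative vectorized interleaved rotary update; not the PR's code.
import triton
import triton.language as tl


@triton.jit
def rotary_interleaved_kernel(
    state_out, state, cos, sin,
    stride_state_n, stride_state_h, stride_state_d,
    stride_cos_n, stride_cos_d,
    BLOCK_D: tl.constexpr,
):
    token = tl.program_id(0)
    head = tl.program_id(1)

    # One vector of pair indices covers every (even, odd) dimension pair,
    # replacing the per-pair loop. Assumes BLOCK_D is a power of two so
    # tl.arange is valid.
    pair = tl.arange(0, BLOCK_D // 2)
    dim_even = pair * 2
    dim_odd = dim_even + 1

    base = token * stride_state_n + head * stride_state_h
    x = tl.load(state + base + dim_even * stride_state_d)
    y = tl.load(state + base + dim_odd * stride_state_d)

    # cos/sin are indexed per pair for the interleaved layout.
    c = tl.load(cos + token * stride_cos_n + pair * stride_cos_d)
    s = tl.load(sin + token * stride_cos_n + pair * stride_cos_d)

    # Apply the rotation to all pairs in one shot:
    # out_even = x*cos - y*sin, out_odd = x*sin + y*cos.
    tl.store(state_out + base + dim_even * stride_state_d, x * c - y * s)
    tl.store(state_out + base + dim_odd * stride_state_d, x * s + y * c)


def apply_rotary_interleaved(state_out, state, cos, sin):
    # One program per (token, head); the pair dimension is fully vectorized.
    num_tokens, num_heads, head_dim = state.shape
    rotary_interleaved_kernel[(num_tokens, num_heads)](
        state_out, state, cos, sin,
        state.stride(0), state.stride(1), state.stride(2),
        cos.stride(0), cos.stride(1),
        BLOCK_D=head_dim,
    )
```

With this shape, the host launches the kernel exactly once and the rotation over all dimension pairs happens in registers, which is what removes the per-pair launch overhead the comment describes.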

@qianjinqiu force-pushed the master branch 8 times, most recently from 6c8c939 to b34f31e on October 20, 2025 at 13:07.
@qianjinqiu force-pushed the master branch 2 times, most recently from b05cd75 to 493dfad on October 22, 2025 at 08:38.
@Galaxy1458 (Collaborator) left a comment:

LGTM
